Descentwise inexact proximal algorithms for smooth optimization
Authors
Abstract
The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, and (iii) an update formula for the proximal parameter. In this work we focus on (ii) in the smooth case, where Newton-like methods are available for (i): our aim is to give stopping rules that yield overall efficiency of the method. Roughly speaking, the usual rules stop the inner iterations when the current iterate is close to the proximal point. By contrast, we follow the standard paradigm of numerical optimization: the basis for our stopping test is a "sufficient" decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the resulting algorithm and illustrate it on some ill-conditioned functions. The experiments show that combining a standard smooth optimization algorithm with the proposed inexact proximal scheme improves numerical behaviour on those problems.
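As a rough illustration of this design, here is a minimal numpy sketch (assuming f smooth and convex): an outer proximal loop whose inner Newton iterations on the prox objective phi(y) = f(y) + ||y - x||^2/(2*lam) are stopped as soon as the observed decrease of f reaches a fraction m of a computable upper bound on the ideal decrease f(x) - f(p). The bound below follows only from the (1/lam)-strong convexity of phi and is our illustrative stand-in for the paper's actual test; all function names and parameter choices are ours, not the authors'.

```python
import numpy as np

def prox_descent(f, grad_f, hess_f, x0, lam=1.0, m=0.5,
                 outer_iters=100, inner_iters=20, tol=1e-8):
    """Outer proximal loop; inner Newton steps on the prox objective
    phi(y) = f(y) + ||y - x||^2 / (2*lam), stopped by a descent test.
    Illustrative rule assuming f convex -- not the paper's exact test."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(outer_iters):
        if np.linalg.norm(grad_f(x)) < tol:
            break                               # outer stationarity reached
        fx, y = f(x), x.copy()
        for _ in range(inner_iters):
            g = grad_f(y) + (y - x) / lam       # gradient of phi at y
            y = y - np.linalg.solve(hess_f(y) + I / lam, g)
            g = grad_f(y) + (y - x) / lam
            dist = np.linalg.norm(y - x)
            gn = np.linalg.norm(g)
            # Upper bound on the ideal decrease f(x) - f(p), using the
            # (1/lam)-strong convexity of phi:
            #   phi(y) - phi(p) <= (lam/2)*||g||^2,  ||y - p|| <= lam*||g||.
            phi_y = f(y) + dist ** 2 / (2 * lam)
            bound = (fx - phi_y + 0.5 * lam * gn ** 2
                     + (dist + lam * gn) ** 2 / (2 * lam))
            if fx - f(y) >= m * bound:          # "sufficient" decrease: stop
                break
        x = y
    return x

# Toy ill-conditioned quadratic f(z) = 0.5 * z' diag(d) z, condition 1e4.
d = np.array([1e4, 1.0])
x = prox_descent(f=lambda z: 0.5 * d @ (z * z),
                 grad_f=lambda z: d * z,
                 hess_f=lambda z: np.diag(d),
                 x0=np.array([1.0, 1.0]))
print(x)        # close to the minimizer at the origin
```

The classical alternative would stop the inner loop once ||grad phi(y)|| is small, i.e., once y is close to the proximal point; the test above instead certifies a fraction of the ideal decrease, which can trigger much earlier.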
Similar references
Inexact Proximal Gradient Methods for Non-convex and Non-smooth Optimization
Non-convex and non-smooth optimization plays an important role in machine learning. The proximal gradient method is one of the most important methods for solving such non-convex and non-smooth problems, and it requires a proximal subproblem to be solved exactly at each step. However, in many problems the proximal operator has no analytic solution, or an exact solution is expensive to obtain. ...
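As a concrete illustration of a proximal operator without an analytic solution, consider the 1-D total-variation regularizer. A standard workaround (a Chambolle-style dual method, not taken from the paper above, and shown here on a convex instance for simplicity) is to run a few projected-gradient steps on the dual of the prox subproblem inside each proximal-gradient iteration. The problem instance and all names below are illustrative.

```python
import numpy as np

def Dt(z):                      # adjoint of the forward-difference operator D
    return np.concatenate(([-z[0]], z[:-1] - z[1:], [z[-1]]))

def inexact_prox_tv(v, gamma, inner_iters):
    # Approximate prox of gamma*TV via projected gradient on its dual:
    #   min_z 0.5*||D'z - v||^2  s.t.  |z_i| <= gamma,  then u = v - D'z.
    z = np.zeros(v.size - 1)
    for _ in range(inner_iters):
        z = np.clip(z - 0.25 * np.diff(Dt(z) - v), -gamma, gamma)
    return v - Dt(z)

def inexact_prox_grad(A, b, gamma, iters=300, inner_iters=20):
    # Proximal gradient for 0.5*||Ax - b||^2 + gamma*TV(x), with the
    # TV-prox solved only approximately at every step.
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step = 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = inexact_prox_tv(x - t * A.T @ (A @ x - b), t * gamma, inner_iters)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60)) / np.sqrt(40)
truth = np.repeat([1.0, -0.5, 2.0], 20)          # piecewise-constant signal
b = A @ truth + 0.01 * rng.standard_normal(40)
print(np.round(inexact_prox_grad(A, b, gamma=0.05), 2))   # roughly piecewise constant
```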
A Class of Inexact Variable Metric Proximal Point Algorithms
For the problem of solving maximal monotone inclusions, we present a rather general class of algorithms, which contains hybrid inexact proximal point methods as a special case and allows for the use of a variable metric in subproblems. The global convergence and local linear rate of convergence are established under standard assumptions. We demonstrate the advantage of variable metric implement...
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
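The following sketch illustrates that statement for the accelerated method (FISTA) on a lasso instance: the prox is computed exactly and then deliberately perturbed by an error of norm c/k^q, mimicking an inner solver whose accuracy improves over the iterations. The schedule c/k^q is our illustrative choice, not the precise error condition of the paper.

```python
import numpy as np

def soft(v, t):                       # exact prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_fista(A, b, lam, iters=500, c=1.0, q=3.0):
    # Accelerated proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1,
    # with the prox perturbed by an error of size c/k^q at iteration k.
    L = np.linalg.norm(A, 2) ** 2
    x = y = np.zeros(A.shape[1])
    tk = 1.0
    rng = np.random.default_rng(1)
    for k in range(1, iters + 1):
        e = rng.standard_normal(x.size)
        e *= (c / k ** q) / np.linalg.norm(e)         # controlled prox error
        x_new = soft(y - A.T @ (A @ y - b) / L, lam / L) + e
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * tk ** 2))  # usual FISTA momentum
        y = x_new + ((tk - 1) / t_new) * (x_new - x)
        x, tk = x_new, t_new
    return x

rng0 = np.random.default_rng(0)
A = rng0.standard_normal((50, 100)) / np.sqrt(50)
b = A @ np.concatenate([np.ones(5), np.zeros(95)])    # sparse ground truth
x = inexact_fista(A, b, lam=0.1)
print(f"objective: {0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.abs(x).sum():.4f}")
```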
On the contraction-proximal point algorithms with multi-parameters
In this paper we consider the contraction-proximal point algorithm $x_{n+1} = \alpha_n u + \lambda_n x_n + \gamma_n J_{\beta_n} x_n$, where $J_{\beta_n}$ denotes the resolvent of a monotone operator $A$. Under the assumptions that $\lim_n \alpha_n = 0$, $\sum_n \alpha_n = \infty$, $\liminf_n \beta_n > 0$, and $\liminf_n \gamma_n > 0$, we prove the strong convergence of the iterates as well as its inexact version. As a result we improve and recover some recent results by Boikanyo and Morosa...
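Under the additional (assumed) normalization $\alpha_n + \lambda_n + \gamma_n = 1$, which the snippet does not state, the iteration is easy to run for a linear monotone operator A x = M x, whose resolvent is J_beta = (I + beta*M)^{-1}; the iterates should then approach the projection of the anchor u onto the zero set of A. The parameter schedules below are one illustrative choice satisfying the quoted conditions.

```python
import numpy as np

def contraction_prox(M, u, x0, iters=5000, beta=1.0):
    # x_{n+1} = a_n*u + l_n*x_n + g_n*J_{b_n} x_n, with J_b = (I + b*M)^{-1}
    # for the monotone linear operator A x = M x.  Choices below satisfy
    # a_n -> 0, sum a_n = inf, liminf b_n > 0, liminf g_n > 0.
    J = np.linalg.inv(np.eye(len(u)) + beta * M)   # resolvent (fixed b_n)
    x = np.asarray(x0, dtype=float)
    for n in range(1, iters + 1):
        a = 1.0 / (n + 1)                          # a_n -> 0, sum = inf
        l = g = (1.0 - a) / 2.0                    # so a_n + l_n + g_n = 1
        x = a * u + l * x + g * (J @ x)
    return x

M = np.diag([1.0, 0.0])                  # zero set of A is span{e2}
u = np.array([3.0, 2.0])
print(contraction_prox(M, u, x0=np.array([5.0, -4.0])))
# expected: close to (0, 2), the projection of u onto the zero set of A
```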
Inexact Variants of the Proximal Point Algorithm without Monotonicity
This paper studies convergence properties of inexact variants of the proximal point algorithm when applied to a certain class of nonmonotone mappings. The presented algorithms allow for constant relative errors, in the line of the recently proposed hybrid proximal-extragradient algorithm. The main convergence result extends a recent work of the second author, where exact solutions for the proxi...
Journal:
Comp. Opt. and Appl.
Volume: 53, Issue: -
Pages: -
Year of publication: 2012